Robust Multiple-Path Orienteering Problem: Securing Against Adversarial Attacks

Authors

Abstract

The multiple-path orienteering problem asks for paths for a team of robots that maximize the total reward collected while satisfying budget constraints on the path length. This problem models many multirobot routing tasks, such as exploring unknown environments, information gathering, and environmental monitoring. In this article, we focus on how to make the robot team robust to failures when operating in adversarial environments. We introduce the robust multiple-path orienteering problem (RMOP), where we seek worst-case guarantees against an adversary that is capable of attacking at most $\alpha$ robots. We consider two versions of this problem: RMOP offline and RMOP online. In the offline version, there is no communication or replanning while the robots execute their plans, and our main contribution is a general approximation scheme with a bounded guarantee that depends on the approximation factor for single-robot orienteering. In particular, we show that the algorithm yields a: 1) constant-factor approximation when the cost function is modular; 2) $\log$-factor approximation when the cost function is submodular; and 3) constant-factor approximation when the cost function is submodular but the robots are allowed to exceed their budgets by a bounded amount. The online version is modeled as a two-player sequential game and solved adaptively in a receding horizon fashion based on Monte Carlo tree search. In addition to the theoretical analysis, we perform simulation studies for ocean monitoring and tunnel information-gathering applications to demonstrate the efficacy of our approach.
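To make the worst-case objective concrete, the sketch below (a hypothetical helper, not code from the paper) evaluates a candidate set of robot paths against an adversary that removes up to $\alpha$ robots, assuming a modular reward that counts each visited node once. It enumerates all attack sets by brute force, which is only feasible for small teams; the paper's contribution is an approximation scheme that avoids this enumeration.

```python
from itertools import combinations

def robust_reward(paths, reward, alpha):
    """Worst-case team reward when an adversary removes up to alpha robots.

    paths  : list of paths, one per robot (each path is a list of visited nodes)
    reward : dict mapping node -> reward (modular objective assumed)
    alpha  : maximum number of robots the adversary can attack
    """
    n = len(paths)
    worst = float("inf")
    # The adversary picks the alpha robots whose removal hurts the team most.
    for attacked in combinations(range(n), alpha):
        survivors = [p for i, p in enumerate(paths) if i not in attacked]
        # Reward is counted once per distinct node visited by surviving robots.
        visited = set().union(*survivors) if survivors else set()
        worst = min(worst, sum(reward[v] for v in visited))
    return worst

# Example: three robots, adversary attacks one.
# Attacking robot 2 (path ["d"]) is not optimal for the adversary here:
# removing robot 2 leaves {a, b, c} = 6, the worst surviving reward.
print(robust_reward([["a", "b"], ["b", "c"], ["d"]],
                    {"a": 1, "b": 2, "c": 3, "d": 4}, alpha=1))  # -> 6
```

The planner's task in RMOP is then to choose the paths that maximize this min-over-attacks value subject to the per-robot budget, i.e., a max-min optimization.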


Similar articles

Securing Against Insider Attacks

We are all creatures of habit; the way we think and the views we take are conditioned by our education, by society as a whole, and, at a much deeper level, by our cultural memories or instincts. It is sometimes surprising how much the past can unconsciously affect today's thinking. George Santayana famously observed, "Those who cannot remember the past are condemned to repeat it." But when it comes to ...


Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Machine learning systems based on deep neural networks, being able to produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they are shown to be vulnerable to adversarial example attacks, which generate malicious outputs by adding slight perturbations to the input. Previous adversarial example crafting methods, however, use s...


Robust Watermarking Scheme Against Multiple Attacks

A good watermarking technique helps in protecting the copyright of an image, which is the motivating factor in developing new encryption techniques. The present paper inserts the watermark using the Least Significant Bit (LSB) method into the three components of the image, namely RED, GREEN, and BLUE. The watermark is a binary image, embedded into the host image by altering LSB values ...


MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples

MagNet and "Efficient Defenses..." were recently proposed as defenses against adversarial examples. We find that we can construct adversarial examples that defeat these defenses with only a slight increase in distortion.


Securing Deep Neural Nets against Adversarial Attacks with Moving Target Defense

Deep Neural Networks (DNNs) are presently the state-of-the-art for image classification tasks. However, recent works have shown that these systems can be easily fooled to misidentify images by modifying the image in particular ways, often rendering them practically useless. Moreover, defense mechanisms proposed in the literature so far are mostly attack-specific and prove to be ineffective agai...



Journal

Journal: IEEE Transactions on Robotics

Year: 2023

ISSN: 1552-3098, 1941-0468, 1546-1904

DOI: https://doi.org/10.1109/tro.2022.3232268